
The Week in AI: "All incumbents are gonna get nuked."


Welcome back to The Week in AI. I'm Kevin Cook, your field guide and storyteller for the fascinating arena of artificial intelligence.

On Friday, my colleague Ethan Feller and I ran through a dozen developments that are transforming the economy right before our eyes. Here are seven of the highlights...

1) Jensen at NVIDIA GTC Paris: "We are going to sell hundreds of billions worth of GB200/300."

CEO Jensen Huang has forecast that spending on AI-enabled data centers will double to $2 trillion over the next four to five years. As Grace Blackwell systems deploy -- with 208 billion transistors per GPU, or nearly 15 trillion per GB200 NVL72 rack system -- NVIDIA (NVDA - Free Report) engineers are building the roadmap for Rubin and Feynman systems with likely orders of magnitude greater power.
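If you want to sanity-check that 15 trillion figure, here is a minimal back-of-the-envelope sketch (assuming the 72 Blackwell GPUs implied by the NVL72 name):

```python
# Back-of-the-envelope check on the rack-level transistor count cited above.
# Assumes 72 Blackwell GPUs per GB200 NVL72 rack (the "72" in the name).
transistors_per_gpu = 208e9   # ~208 billion transistors per Blackwell GPU
gpus_per_rack = 72

rack_total = transistors_per_gpu * gpus_per_rack
print(f"{rack_total / 1e12:.1f} trillion transistors per NVL72 rack")
# -> 15.0 trillion, matching the "nearly 15 trillion" figure
```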

This is something I've talked about repeatedly for the past year: Wall Street analysts and investors are vastly underestimating the potential of the AI economy and the upgrades in infrastructure that need to occur to support self-driving cars, humanoid robots, and other autonomous machines.

And this doesn't include sovereign nation-states that need to build their own AI infrastructure for security and growth.

If you ever need clarity about the AI Revolution, or just to recalibrate your expectations and convictions, there is one place you need to visit: the NVIDIA Newsroom -- especially around a GPU Tech Conference (GTC). (I show you where in the video.)

For last week's Paris GTC, they rolled out 6 press releases and 19 blogs covering new innovations and partnerships across industry, enterprise, science, and healthcare.

Nobody Wanted AI GPUs in 2016

Jensen also retells the story of the first DGX-1 in 2016. It was a mini supercomputer about the size of a college dorm fridge, and it held 8 Volta GPUs with 21 billion transistors each.

And nobody wanted it. Except a little startup called OpenAI.

I like to use this story as an example of how NVIDIA has been in a unique position ever since. They don't have to find "product-market fit" like most companies. Instead, they have been inventing a stack that developers didn't know they needed.

Get the whole story in the replay of last Friday's The Week in AI: The Reasoning Wars, Sam's Love Letter, Zuck's Land Grab.

Even if you don't have time for the 60-minute replay, at least do a quick scroll of the comments where I post all the relevant links to the topics we discussed.

With over 25 links, you are guaranteed to find something that answers your top questions about the AI revolution!

2) The New Civil War In AI: Not Safety, But Efficacy

There are many exciting debates going on in "the revolution" right now. A recent hot conflict is over whether LLMs (large language models) are doing real reasoning, or even thinking.

This one heated up after Apple (AAPL - Free Report) researchers released their paper "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models."

We are amazed by the research, writing, pattern-finding and puzzle-solving of these models. But Apple researchers found some limitations where the models "give up" on large problems without enough context.

And it's worth pondering if they are simply "token prediction" machines that eventually get wrapped around their own axles.

I've experienced this with some of the "vibe-coding" app developer tools like Replit and Bolt.

But other analysts and papers quickly responded, surfacing "limitations" in the Apple research itself and suggesting that the cost and token budgets imposed on the models were the defining factor in their giving up.

One of the rebuttal papers was titled "The Illusion of the Illusion of Thinking." Again, all these links are in the comments section of The Week in AI.

3) Google Offers Buyouts: AI Headcount Crunch Beginning?

My third topic was once again about the employment impact from generative AI and agentic AI being adopted in corporations.

I ran a query on ChatGPT for the "top 100 jobs most likely to be disrupted" in the next 3 years. You can find the link in the comments of the X Space.

Another tangible angle on job displacement was the revolutionary ad that ran during the NBA Finals from the prediction market platform Kalshi. It was created by a filmmaker named PJ Ace using Google's new Veo 3 video generation model.

Ethan and I discussed how this innovation is certain to disrupt advertising, marketing, and film, as the machines can do in minutes what used to take a team of people weeks.

And wait until you see the new Veo 3 ad from a Los Angeles dentist that is taking social media by storm. We'll talk about that in this Friday's Space.

Welcome to the Machine

But the most eye-opening news flash for me was the story on a company called Mechanize. While lots of job displacement will happen organically, this outfit is like a mercenary going after headcount.

The New York Times titled their article "This A.I. Company Wants to Take Your Job."

And here's how an X post described the piece about the startup that wants to automate white-collar work "as fast as possible"...

"Mechanize wants to abolish all jobs. They make no secret of this. They are developing an AI program that is extremely promising and is being financed by everyone from Google to Stripe."

Then there is Anthropic co-founder Ben Mann saying we'll know AI is transformative when it passes the "Economic Turing Test":

"Give an AI agent a job for a month. Let the hiring manager choose: human or machine? When they pick the machine more often than not, we've crossed the threshold."

I have several posts in the comments of the "The Week in AI" X Space on the employment wars. Plus, just about every post comes from a source of AI insight or expertise whose account you should be following on X.

4) Marc Andreessen: "All incumbents are gonna get nuked. Everything gets rebuilt."

Translation: AI isn't an economic upgrade. It's a total reset.

Which brings me to my favorite part of our Friday X Space...

Cooker's RANT of the WEEK: "The Magical AI Transformation Won't Be So Gentle." Here I take the other side of Sam Altman's blog post from last week titled "The Gentle Singularity."

I call it his "love letter" not to make fun of him, but to highlight his optimism in the face of brewing storms.

A few weeks ago it was Anthropic CEO Dario Amodei warning us about the rapid disruption of work and its impact on citizens and families, not just the economy. Then the wise old man of AI, Geoffrey Hinton, shared these sentiments in a recent interview...

The best-case future is a "symbiosis between people and AI" -- where machines handle the mundane, and humans live more interesting lives.

But in the kind of society we have now, he warns, AI won't free most people. It will concentrate power, and as massive productivity increases create joblessness, it will mostly benefit the rich.

This sober view instantly made me think of Yuval Noah Harari's 2016 book Homo Deus, in which the historian described how technology usually gets concentrated in the hands of the rich and powerful. It's just how economics works, no matter the political flavor.

In this way, AI can move quickly beyond issues of personal safety to those of economic security. In the X Space replay and the comments below it, I discuss the implications of "post-labor economics" as well as share more expert resources on these topics.

Be sure to catch the replay of The Week in AI to hear my sense of the "not-so-gentle" transition we are headed into.

5) Apple WWDC: The Non-Event of the Week in AI

For what to expect (or not) from Apple in AI innovation, I always turn to Robert Scoble on X @Scobleizer. Here were some of his summary posts...

Cynical take on Apple's WWDC: just doing things Microsoft did back in 2003. Liquid glass. Menus on tablets.

Dark take on it: it's way behind in AI, and didn't demonstrate any attempt to catch up.

Light take: Lots of new AI features, like your phone will wait on hold for you now.

Hopeful take: the new design joins Apple Vision Pro into its ecosystem, showing that the Apple Vision Pro is the future of Apple.

Scoble adds: I really hate the recorded product demos and the old people showing new features and attempting to be "hip."

On a more Apple-positive note, Scoble is looking forward to the next devices which should be coming in the AR space...

Later this year both Apple and Google are introducing heavyweight category wearables. Lighter than the first Vision Pro.

We will judge them by who has the best AI inside.

That is more important than resolution.

Google, today, looks like it is way ahead and pulling further away because this is a game of exponents.

I will buy both anyway. :-)

(end of @Scobleizer rants)

Many experts are sensing that Alphabet (GOOGL - Free Report) is "firing on all cylinders across AI," as we've discussed previously. From Gemini 2.5 Pro and the astonishing new Veo 3 to building AI capabilities on their own TPUs (instead of relying on NVIDIA GPUs), they're the only vertically integrated player across all realms of tech.

Google will probably also figure out the shift from classic search to generative search, as Daniel Newman of the Futurum technology research group says. Reports of Google's demise have been greatly exaggerated, according to @DanielNewmanUV, and I wish I had been listening before I sold my shares on the last "search is dead" scare.

6) Zuck Splashes the Pot with $14.3 Billion

Meta Platforms (META - Free Report) plunked down that amount for just 49% of a private company called Scale AI. But the price tag made it the biggest pure-AI deal yet, topping OpenAI's $6 billion purchase of Jony Ive's company.

Just as Sam wasn't waiting around to find out what AI-native device Apple would build, Zuck isn't waiting around for permission to access the premier company in the data supply chain -- what some are calling the oil refinery of the AI economy.

What does that mean? Well, if you think about data as various grades of crude oil, it needs to be cleaned and prepped in a number of ways before it can be "mined and modeled" for quality results.

That's where Scale AI comes in with data prep and labeling, because major AI models need structured, labeled training data to power deep learning and generate useful knowledge and insights.
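To make the "refinery" analogy concrete, here is a purely illustrative sketch of what a single labeled training record might look like in a labeling pipeline. The field names and the quality-gate step are my own hypothetical example, not Scale AI's actual schema or tooling:

```python
# Illustrative only: a toy labeled record of the kind a data-labeling pipeline
# might produce. Field names are hypothetical, not Scale AI's actual schema.
labeled_example = {
    "raw_text": "The Helios rack ships in 2026 with 72 MI400 GPUs.",
    "task": "named_entity_recognition",
    "labels": [
        {"span": "Helios", "type": "PRODUCT"},
        {"span": "2026", "type": "DATE"},
        {"span": "MI400", "type": "PRODUCT"},
    ],
    "annotator_id": "human-reviewer-042",  # human-in-the-loop review
    "quality_score": 0.97,                 # consensus/QA score from reviewers
}

# Only records that clear a quality bar make it into the training set.
training_set = [r for r in [labeled_example] if r["quality_score"] >= 0.9]
print(len(training_set), "record(s) ready for training")
```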

Scale AI is a San Francisco-based artificial intelligence company founded in 2016 by Alexandr Wang and Lucy Guo. The company specializes in providing high-quality data labeling, annotation, and model evaluation services that are essential for training advanced AI models, including large language models (LLMs) and generative AI systems.

Scale AI is known for its robust data engine, which powers AI development for leading tech firms, government agencies, and startups worldwide. Its research division, the Safety, Evaluation and Alignment Lab (SEAL), focuses on evaluating and aligning AI models for safety and reliability.

7) AMD Unveils AI Server Rack, Sam on Stage with Lisa

I am still shaking my head at all the stuff that happened last week! As if all of the above wasn't enough, Advanced Micro Devices (AMD - Free Report) held its annual Advancing AI conference last Thursday with a product roadmap for hyperscale inferencing that caught investor attention.

In addition to leaps forward in performance for the existing Instinct MI350 Series GPU systems, AMD CEO Lisa Su unveiled the Helios AI rack-scale architecture, supporting up to 72 MI400 GPUs with 432GB of HBM4 memory and 19.6 TB/sec of bandwidth per GPU. Available in 2026, it is clearly an answer to NVIDIA's GB200/300 series rack systems.
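As a quick sketch of what those per-GPU numbers imply at rack scale (my own arithmetic from the figures above, not an official spec sheet):

```python
# Rack-level totals derived from the per-GPU figures quoted above
# (72 MI400 GPUs, 432 GB of HBM4 and 19.6 TB/s of bandwidth per GPU).
gpus_per_rack = 72
hbm4_per_gpu_gb = 432
bandwidth_per_gpu_tbs = 19.6

rack_memory_tb = gpus_per_rack * hbm4_per_gpu_gb / 1000            # ~31 TB of HBM4
rack_bandwidth_pbs = gpus_per_rack * bandwidth_per_gpu_tbs / 1000  # ~1.4 PB/s aggregate

print(f"~{rack_memory_tb:.0f} TB HBM4 and ~{rack_bandwidth_pbs:.1f} PB/s per Helios rack")
```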

AI Market Growth: CEO Lisa Su projected an 80% increase in AI inference demand by 2026, driven by the rapid adoption and expansion of AI applications in enterprise and cloud environments.

Roadmap: AMD reaffirmed its commitment to an annual cadence of AI chip releases, with the MI400 and MI450 series already in development and expected to challenge NVIDIA's flagship offerings in 2026 and beyond.

And then Sam Altman showed up during Lisa's keynote. Since he clearly can't get enough compute or GPUs, he's as tight with Lisa as he is with Jensen.

Lisa welcomed the OpenAI co-founder and CEO on stage as a key design partner for AMD's upcoming MI450 GPU, one who will help shape the next generation of AMD's AI hardware. OpenAI will use AMD GPUs and Helios servers for advanced AI workloads, including ChatGPT.

And AMD's other happy customers continue to come back for more, with Meta deploying AMD Instinct MI300X GPUs for Llama 3/4 inference and collaborating on future MI350/MI400 platforms.

Meanwhile, Microsoft Azure runs proprietary and open-source models on AMD Instinct MI300X GPUs in production, and Oracle Cloud Infrastructure will deploy zettascale AI clusters with up to 131,072 MI355X GPUs, offering massive AI compute capacity to customers.

This event made AMD shares a clear buy last week -- and this week if you can still grab some under $130!

OLD RANT: The Fundamental Difference

Finally, did you hear what another OpenAI co-founder said at the University of Toronto commencement address? Ilya Sutskever, the OpenAI architect and deep learning pioneer who in 2024 started his own model firm, Safe Superintelligence, spoke these words to the new grads...

"The day will come when AI will do all the things we can do. The reason is the brain is a biological computer, so why can't the digital computer do the same things?

"It's funny that we are debating if AI can truly think or give the illusion of thinking, as if our biological brain is superior or fundamentally different from a digital brain."

I had to dig out my old rant about the fundamental difference(s) between human brains and computer "thinking." If you haven't heard me on this, you owe it to yourself to listen so you can easily explain the differences to other "intelligence experts" telling you how consciousness works.

Bottom line: To stay informed in AI, listen to The Week in AI replay, or just go to that post to see all the links and sources. And be sure to follow me on X @KevinBCook so you see the announcement for the new live Space every Friday.
 
